7 research outputs found

    Detection of Macula and Recognition of Aged-Related Macular Degeneration in Retinal Fundus Images

    Get PDF
    Age-Related Macular Degeneration (AMD) affects central vision in elderly people. AMD can be recognized in digital retinal fundus images by the presence of drusen, choroidal neovascularization (CNV), and geographic atrophy (GA). Manual review of fundus images by ophthalmologists is time-consuming and costly, and an automated fundus-photography monitoring system can reduce these problems. In this paper, we propose a new macula detection system based on contrast enhancement, top-hat transformation, and a modified Kirsch template method. First, the retinal fundus image is processed with an image enhancement method so that its intensity distribution is improved for finer visualization. The contrast-enhanced image is further processed with the top-hat transformation to make the intensity levels of the macula distinguishable from other regions of the image. The retinal vessels are then enhanced with the modified Kirsch template method, which emphasizes vascular structures and suppresses blob-like structures. Otsu thresholding is used to segment the dark regions and separate the vessels to extract candidate regions. The dark-region and estimated-background images are subtracted from the extracted blood-vessel image to obtain the exact location of the macula. The proposed method was applied to 1349 images from the STARE, DRIVE, MESSIDOR, and DIARETDB1 databases and achieved an average sensitivity, specificity, accuracy, positive predictive value, F1 score, and area under the curve of 97.79%, 97.65%, 97.60%, 97.38%, 97.57%, and 96.97%, respectively. Experimental results show that the proposed method attains better performance, in terms of visual quality and quantitative analysis, than prominent state-of-the-art methods.
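
    A minimal Python sketch of the preprocessing chain described above (contrast enhancement, top-hat transformation, Kirsch-style vessel enhancement, and Otsu thresholding), assuming OpenCV and NumPy; the CLAHE step, kernel sizes, and the reduced set of Kirsch directions are illustrative assumptions rather than the authors' exact parameters.

    import cv2
    import numpy as np

    def detect_macula_candidates(fundus_bgr):
        green = fundus_bgr[:, :, 1]  # green channel carries the strongest retinal contrast

        # 1. Contrast enhancement (CLAHE stands in for the paper's enhancement step).
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        enhanced = clahe.apply(green)

        # 2. Top-hat transformation to separate structures from the slowly varying background.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (25, 25))
        tophat = cv2.morphologyEx(enhanced, cv2.MORPH_TOPHAT, kernel)

        # 3. Kirsch-style directional templates to emphasize elongated vessels and
        #    suppress blob-like regions (only two of the eight directions shown).
        kirsch_n = np.array([[5, 5, 5], [-3, 0, -3], [-3, -3, -3]], dtype=np.float32)
        kirsch_e = np.array([[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]], dtype=np.float32)
        vessels = np.maximum(cv2.filter2D(enhanced, cv2.CV_32F, kirsch_n),
                             cv2.filter2D(enhanced, cv2.CV_32F, kirsch_e))
        vessels = cv2.normalize(vessels, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

        # 4. Otsu thresholding segments the dark regions; subtracting the vessel mask
        #    leaves macula-like dark blobs as candidate regions.
        _, dark = cv2.threshold(255 - tophat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        _, vessel_mask = cv2.threshold(vessels, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return cv2.subtract(dark, vessel_mask)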

    TTCNN: A Breast Cancer Detection and Classification towards Computer-Aided Diagnosis Using Digital Mammography in Early Stages

    No full text
    Breast cancer is a major research area in the medical image analysis field; it is a dangerous disease and a major cause of death among women. Early and accurate diagnosis of breast cancer based on digital mammograms can enhance disease detection. For computer-aided diagnosis (CAD) systems to help radiologists accurately diagnose breast lesions, medical imagery must be detected, segmented, and classified. Therefore, an accurate breast cancer detection and classification approach is proposed for screening mammograms. In this paper, we present a deep learning system that can identify breast cancer in screening mammograms using an “end-to-end” training strategy that efficiently uses mammography images for computer-aided breast cancer recognition in the early stages. First, the proposed approach applies a modified contrast enhancement method to refine edge detail in the source mammogram images. Next, the transferable texture convolutional neural network (TTCNN) is presented to enhance classification performance; an energy layer is integrated to extract texture features from the convolutional layers. The proposed network consists of only three convolutional layers and one energy layer, instead of a pooling layer. In the third stage, we analyze the performance of TTCNN based on the deep features of convolutional neural network models (InceptionResNet-V2, Inception-V3, VGG-16, VGG-19, GoogLeNet, ResNet-18, ResNet-50, and ResNet-101). The deep features are extracted by determining the best layers, which enhances the classification accuracy. In the fourth stage, all the extracted feature vectors are fused using a convolutional sparse image decomposition approach, and the best features are finally selected with the entropy-controlled firefly method. The proposed approach was evaluated on the DDSM, INbreast, and MIAS datasets and attained an average accuracy of 97.49%. Our transferable texture CNN-based method for classifying screening mammograms outperforms prior methods. These findings demonstrate that automatic deep learning algorithms can be trained to achieve high accuracy on diverse mammography images and offer great potential to improve clinical tools that minimize false positive and false negative screening mammography results.
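
    A hedged PyTorch sketch of a texture-CNN block in the spirit of TTCNN: three convolutional layers followed by an energy layer that averages each feature map into a single value, used in place of a pooling layer. The channel counts, kernel sizes, and two-class head are illustrative assumptions, not the paper's exact architecture.

    import torch
    import torch.nn as nn

    class TextureCNN(nn.Module):
        def __init__(self, in_channels=1, num_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            )
            # Energy layer: the mean activation of each feature map becomes one value,
            # yielding an orderless texture descriptor instead of a pooled spatial grid.
            self.energy = nn.AdaptiveAvgPool2d(1)
            self.classifier = nn.Linear(128, num_classes)

        def forward(self, x):
            x = self.features(x)           # (N, 128, H, W)
            x = self.energy(x).flatten(1)  # (N, 128) texture energies
            return self.classifier(x)

    # Example: classify a batch of 224x224 single-channel mammogram patches.
    logits = TextureCNN()(torch.randn(4, 1, 224, 224))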

    Multi-Modal Brain Tumor Detection Using Deep Neural Network and Multiclass SVM

    No full text
    Background and Objectives: Clinical diagnosis has become very significant in today’s health system. Brain cancer is among the most serious diseases and a major cause of mortality worldwide, and it is a key research topic in the field of medical imaging. The examination and prognosis of brain tumors can be improved by an early and precise diagnosis based on magnetic resonance imaging. For computer-aided diagnosis methods to assist radiologists in the proper detection of brain tumors, medical imagery must be detected, segmented, and classified. Manual brain tumor detection is a monotonous and error-prone procedure for radiologists; hence, it is very important to implement an automated method. As a result, a precise brain tumor detection and classification method is presented. Materials and Methods: The proposed method has five steps. In the first step, linear contrast stretching is used to determine the edges in the source image. In the second step, a custom 17-layer deep neural network architecture is developed for the segmentation of brain tumors. In the third step, a modified MobileNetV2 architecture is used for feature extraction and is trained using transfer learning. In the fourth step, an entropy-based controlled method is used along with a multiclass support vector machine (M-SVM) to select the best features. In the final step, the M-SVM is used for brain tumor classification, identifying meningioma, glioma, and pituitary images. Results: The proposed method was demonstrated on the BraTS 2018 and Figshare datasets. The experimental study shows that the proposed brain tumor detection and classification method outperforms other methods both visually and quantitatively, obtaining accuracies of 97.47% and 98.92%, respectively. Finally, we adopt an eXplainable Artificial Intelligence (XAI) method to explain the results. Conclusions: Our proposed approach for brain tumor detection and classification outperforms prior methods. These findings demonstrate that the proposed approach achieves higher performance in both visual and quantitative evaluation, with improved accuracy.
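
    A minimal Python sketch of the feature-selection and classification stage, assuming deep feature vectors have already been extracted (for example from a fine-tuned MobileNetV2). The entropy scoring shown here is a simple stand-in for the paper's entropy-based controlled selection, and scikit-learn's one-vs-one SVC stands in for the M-SVM.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def entropy_select(features, k=256, bins=32):
        """Keep the k feature dimensions with the highest Shannon entropy."""
        scores = []
        for j in range(features.shape[1]):
            hist, _ = np.histogram(features[:, j], bins=bins)
            p = hist / (hist.sum() + 1e-12)
            scores.append(-np.sum(p * np.log2(p + 1e-12)))
        return np.argsort(scores)[::-1][:k]

    # X: (n_samples, n_deep_features) embeddings; y: 0=meningioma, 1=glioma, 2=pituitary.
    def train_msvm(X, y):
        keep = entropy_select(X)
        clf = make_pipeline(StandardScaler(),
                            SVC(kernel="rbf", decision_function_shape="ovo"))
        clf.fit(X[:, keep], y)
        return clf, keep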

    Success or failure identification for GitHub's open source projects

    No full text
    In this research, we attempt to identify successful and unsuccessful projects on GitHub from a sample of 5000 randomly picked projects in a number of randomly selected languages (Java, PHP, JavaScript, C#/C++, HTML). We selected 1000 projects for each of these languages through the publicly available GitHub API, refined our dataset, and applied different machine learning algorithms to achieve our aim. We first ran numerous queries against the dataset and found meaningful relationships and correlations between some of the fetched attributes that affect the popularity of these projects. We then developed an application that determines the success or failure of a specific open source project.
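
    A hedged Python sketch of the data-collection step through GitHub's public REST search API, deriving a few popularity attributes per repository. The endpoint and field names are standard GitHub API identifiers, but the star/fork thresholds used to label a project "successful" are illustrative assumptions, not the criteria derived in this research.

    import requests

    LANGUAGES = ["Java", "PHP", "JavaScript", "C#", "HTML"]

    def fetch_repos(language, per_page=100, token=None):
        headers = {"Authorization": f"token {token}"} if token else {}
        resp = requests.get(
            "https://api.github.com/search/repositories",
            params={"q": f"language:{language}", "sort": "updated", "per_page": per_page},
            headers=headers,
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["items"]

    def to_features(repo):
        return {
            "stars": repo["stargazers_count"],
            "forks": repo["forks_count"],
            "open_issues": repo["open_issues_count"],
            "watchers": repo["watchers_count"],
            # Illustrative label: treat well-starred, actively forked projects as successful.
            "successful": int(repo["stargazers_count"] >= 100 and repo["forks_count"] >= 10),
        }

    rows = [to_features(r) for lang in LANGUAGES for r in fetch_repos(lang)]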

    ETISTP: An Enhanced Model for Brain Tumor Identification and Survival Time Prediction

    No full text
    Technology-assisted diagnosis is increasingly important in healthcare systems. Brain tumors are a leading cause of death worldwide, and treatment plans rely heavily on accurate survival predictions. Gliomas, a type of brain tumor, have particularly high mortality rates and can be further classified as low- or high-grade, making survival prediction challenging. Existing literature provides several survival prediction models that use different parameters, such as patient age, gross total resection status, tumor size, or tumor grade. However, accuracy is often lacking in these models. The use of tumor volume instead of size may improve the accuracy of survival prediction. In response to this need, we propose a novel model, the enhanced brain tumor identification and survival time prediction (ETISTP) model, which computes tumor volume, classifies the tumor as low- or high-grade glioma, and predicts survival time with greater accuracy. The ETISTP model integrates four parameters: patient age, survival days, gross total resection (GTR) status, and tumor volume. Notably, ETISTP is the first model to employ tumor volume for prediction. Furthermore, our model minimizes computation time by allowing tumor volume computation and classification to run in parallel. Simulation results demonstrate that ETISTP outperforms prominent survival prediction models.
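
    A minimal Python sketch of the two steps ETISTP overlaps: computing tumor volume from a binary segmentation mask (voxel count times voxel volume) while a separate worker grades the tumor. Only the volume arithmetic follows directly from the description; the grading function is a placeholder assumption.

    from concurrent.futures import ThreadPoolExecutor

    import numpy as np

    def tumor_volume_ml(mask, voxel_spacing_mm=(1.0, 1.0, 1.0)):
        """Volume in millilitres: number of tumor voxels * voxel volume (mm^3) / 1000."""
        voxel_mm3 = float(np.prod(voxel_spacing_mm))
        return mask.astype(bool).sum() * voxel_mm3 / 1000.0

    def grade_tumor(scan):
        # Placeholder for the low-/high-grade glioma classifier.
        return "HGG" if scan.mean() > 0.5 else "LGG"

    def analyze(scan, mask, spacing=(1.0, 1.0, 1.0)):
        # Volume computation and grading run concurrently, mirroring the parallel
        # execution the model uses to reduce computation time.
        with ThreadPoolExecutor(max_workers=2) as pool:
            vol_f = pool.submit(tumor_volume_ml, mask, spacing)
            grade_f = pool.submit(grade_tumor, scan)
            return vol_f.result(), grade_f.result()

    # Example: a 240x240x155 BraTS-style volume with 1 mm isotropic voxels.
    volume_ml, grade = analyze(np.random.rand(240, 240, 155),
                               np.random.rand(240, 240, 155) > 0.95)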